Partially Monotonic Learning for Neural Networks
Authors
Abstract
In the past decade, we have witnessed widespread adoption of Deep Neural Networks (DNNs) in several Machine Learning tasks. However, in many critical domains, such as healthcare, finance, or law enforcement, transparency is crucial. In particular, the lack of an ability to conform with prior knowledge greatly affects the trustworthiness of predictive models. This paper contributes to the trustworthiness of DNNs by promoting monotonicity. We develop a multi-layer learning architecture that handles a subset of features in the dataset that, according to prior knowledge, have a monotonic relation with the response variable. We use two alternative approaches: (i) imposing constraints on the model's parameters, and (ii) applying an additional component to the loss function that penalises non-monotonic gradients. Our method is evaluated on classification and regression tasks using several datasets. The model is able to conform with the known monotonic relations, improving decision making, while simultaneously maintaining a small and controllable degradation of predictive ability.
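As a rough illustration of the two strategies named in the abstract, the sketch below shows (i) a parameter constraint that clips to non-negative the first-layer weights attached to the monotonic features, and (ii) a penalty on negative partial derivatives of the prediction with respect to those features. This is a minimal sketch assuming PyTorch; the network size, feature indices, penalty weight, and names (PartiallyMonotonicMLP, monotonicity_penalty) are illustrative and not taken from the paper.

```python
# Minimal sketch (assumed PyTorch); names and hyper-parameters are illustrative,
# not the paper's actual architecture.
import torch
import torch.nn as nn

class PartiallyMonotonicMLP(nn.Module):
    """Small MLP; `monotonic_idx` marks features assumed to increase the output."""
    def __init__(self, n_features, monotonic_idx, hidden=32):
        super().__init__()
        self.monotonic_idx = monotonic_idx
        self.net = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x)

    def clamp_weights(self):
        # Strategy (i): constrain parameters by clipping to non-negative the
        # first-layer weights of the monotonic features (a full monotonicity
        # guarantee would need further constraints on downstream layers).
        with torch.no_grad():
            w = self.net[0].weight  # shape: (hidden, n_features)
            w[:, self.monotonic_idx] = w[:, self.monotonic_idx].clamp(min=0.0)

def monotonicity_penalty(model, x):
    # Strategy (ii): penalise negative partial derivatives of the prediction
    # with respect to the features that should be monotonically increasing.
    x = x.clone().requires_grad_(True)
    grads = torch.autograd.grad(model(x).sum(), x, create_graph=True)[0]
    return torch.relu(-grads[:, model.monotonic_idx]).mean()

# Usage on random data; total loss = task loss + lambda * monotonicity penalty.
model = PartiallyMonotonicMLP(n_features=5, monotonic_idx=[0, 2])
x, target = torch.randn(16, 5), torch.randn(16, 1)
loss = nn.functional.mse_loss(model(x), target) + 10.0 * monotonicity_penalty(model, x)
loss.backward()
model.clamp_weights()  # in practice, applied after each optimiser step
```

Either ingredient can be used on its own: the constraint variant enforces monotonicity by construction for the affected weights, while the penalty variant trades monotonicity against the task loss through the weighting factor.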
Similar resources
Reinforcement Learning in Neural Networks: A Survey
In recent years, research on reinforcement learning (RL) has focused on bridging the gap between adaptive optimal control and bio-inspired learning techniques. Neural network reinforcement learning (NNRL) is among the most popular algorithms in the RL framework. The advantage of using neural networks enables the RL to search for optimal policies more efficiently in several real-life applicat...
Solving Partially Observable Reinforcement Learning Problems with Recurrent Neural Networks
In partially observable environments, effective reinforcement learning (RL) is still a fairly open question. Most common algorithms fail to produce good results for those problems. However, many real-world applications are characterized by those difficult environments. In this paper, we propose the application of recurrent neural networks (RNN) to identify, in a first step, the complete state space...
Stochastic Neural Networks with Monotonic Activation Functions
We propose a Laplace approximation that creates a stochastic unit from any smooth monotonic activation function, using only Gaussian noise. This paper investigates the application of this stochastic approximation in training a family of Restricted Boltzmann Machines (RBM) that are closely linked to Bregman divergences. This family, which we call exponential family RBM (Exp-RBM), is a subset of t...
Compare learning function in neural networks for river runoff modeling
Accurate prediction of river flow is one of the most important factors in surface water resources management, especially during floods and drought periods. In fact, deriving a proper method for flow forecasting is an important challenge in water resources management and engineering. Although, during recent decades, some black box models based on artificial neural networks (ANN) have been develop...
Journal
Journal title: Lecture Notes in Computer Science
Year: 2021
ISSN: 1611-3349, 0302-9743
DOI: https://doi.org/10.1007/978-3-030-74251-5_2